Measurement Terminology

When discussing a new topic or subject, it is essential to establish a common language and a set of proper definitions so the topic can be discussed adequately. The following text goes over some of the more common terms one might encounter when dealing with different measurement systems.

Definition:

  1. Measurand - The physical quantity being measured.

Any measurement system that one might use may be broken down into three main elements. The elements of a generalized measurement system are

  • sensing element

The sensing element of a generalized measurement system is the part of the system in direct contact with the measurand.

  • signal modification

The signal modification element of a generalized measurement system is the part of the system that takes the signal from the sensing element and modifies it (typically, it magnifies it). This adjusted signal can then be used by an individual in a useful way.

  • indicator

The indicator of a generalized measurement system is the part of the system that allows the individual to read the measurand's particular value.

The different elements of the generalized measurement system pertain to both analog and digital devices. Consider a glass thermometer. A glass thermometer is constructed of a glass tube with a large bulb on one end. The tube is evacuated, and a fluid is introduced inside the glass body. Mercury is a common fluid, but it is often avoided nowadays due to the risk of mercury poisoning; ethanol is a common alternative. Once the fluid has been introduced inside the glass body, the opening is sealed. When the thermometer experiences a temperature change, the fluid rises and falls within the thermometer's body or stem. The fluid rises and falls due to changes in its density. With this understanding of how a glass thermometer works, the generalized measurement system components may be identified. Shown below in Figure 1 are the three parts of a generalized measurement system of a glass thermometer.

Figure 1: Generalized Measurement System Components

It may be seen that the sensing element is the bulb of the thermometer, the stem or body is the signal modification, and the indicator is the printed scale along the edges.

Definition:

  1. Error - The difference between a measurand's measured value and its true value.

There will always be some deviation between the actual value and the measurement system's output. This deviation is called error. The question then becomes: is this acceptable? The answer is yes, as long as the deviation is acceptable for the experiment's intended purpose. Mathematically, error is

$$ error = \mid measured\ value - true\ value \mid $$
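As a quick sketch, the error equation may be evaluated in Python (the measured and true values below are hypothetical, chosen only for illustration):

```python
# Hypothetical values, for illustration only.
measured_value = 6.05  # in V
true_value = 6.11      # in V

# error = |measured value - true value|
error = abs(measured_value - true_value)
print(round(error, 2))  # 0.06 V
```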

Error may be seen graphically on a number line. Error, as represented on a number line, is shown below in Figure 2.

Figure 2: Error Represented On A Number Line

In practice, the true value will never be known, which makes determining the error in a measurement difficult. In this course, we will learn to quantify a measurement's quality by calculating the uncertainty interval or merely the measurement's uncertainty.

Definition:

  1. Systematic Error - An error that occurs every time a measurement is taken.

Systematic error is sometimes called a fixed or bias error. If a system has a bias to it, it repeatedly and reproducibly trends in a particular direction. An example of a systematic error is a bathroom scale that always reads $5\ lbf$ more than the correct value. The bathroom scale systematically overestimates the weight of the object being measured. Mathematically, systematic error is

$$ systematic\ error = \mid average - true\ value \mid $$

The systematic error of repeated measurements may be seen graphically on a number line, as shown in Figure 3.

Figure 3: Systematic Error of Repeated Measurements

The repeated measurements are shown as the red x's.

Definition:

  1. Random Error - An error caused by the lack of repeatability in a measurement.

Random error is sometimes called a precision error. These are errors that do not occur the same way every time a measurement is taken. They can be due to any number of things. They may be reduced by averaging many measurements, but they cannot be eliminated entirely. Often the random error of repeated measurements is referred to by the range it spans. The error range is seen graphically below in Figure 4.

Figure 4: Range of Random Error

For a single measurement, the random error is given by

$$ random\ error = \mid specific\ reading - average \mid$$

It should be stressed that this is not the random error of a group of readings but only the individual reading's random error. To determine the random error of a group of readings requires a process that is a bit more involved but will be discussed later in this course.

Common Sources of Systematic Errors

Definition:

  1. Calibration Error - Errors introduced by the calibration of a measurement device.

If you recall, calibration is a test during which known measurand values are applied to the measurement device under specific conditions, and the corresponding output readings are recorded. This type of error can arise from

  • inaccuracy of "known" values

For example, consider the thermistor calibration performed in ENGR/ENGT 120. The "known" values were determined from the digital thermometers. However, those "known" values aren't 100% correct. An error was introduced in assuming that the "known" values were what they were.

  • nonlinearity of the output of the sensor being calibrated

The nonlinearity of the output of a sensor is another source of a calibration error. Consider the plot shown below in Figure 5.

Figure 5: Nonlinear Sensor Output

The red line is the sensor's actual output, while the black line is the sensor's assumed behavior. This error is typically introduced when a trendline is used.

  • some other effects

There is any number of places where errors might be introduced into a calibration. Special care must always be given whenever calibrations are performed to minimize the introduction of errors.

Definition:

  1. Loading Error - Error due to the premature sampling of a system when the system has experienced a change of some sort.

For an example of loading error, consider placing a cool thermometer in a hot cup of water. Immediately after the thermometer is placed in the hot cup of water, the temperature is read. Will the reading taken from the thermometer be correct? It depends on many different factors, but a standard, run-of-the-mill thermometer will take a certain period of time to adjust to the temperature it is experiencing.

Definition:

  1. Spatial Error - Error due to an underlying assumption that there is uniformity spatially.

Consider a bucket of water, as shown below in Figure 6.

Figure 6: Spatial Error Example

There is a resistance heater at the bottom of the bucket, and there are two temperature probes located at $T_1$ and $T_2$. Will $T_1$ and $T_2$ have the same value? It is reasonable to assume that they won't. If only one temperature probe is used, then the assumption is that the water temperature is uniform throughout. In practice, this is not the case. An error is being introduced due to this assumption. Various methods may be used to mitigate spatial errors, but they often cannot be eliminated. For the bucket of water example, the simplest solution is probably to stir the water to promote uniformity.

Example Problem 1

Given:

In a calibration test, measurements using a digital voltmeter have been made of a battery voltage known to have an actual voltage of $6.11V$. The readings are:

$5.98V$, $6.05V$, $6.15V$, $6.06V$, $5.99V$, $6.00V$, $6.08V$, $6.03V$ and $6.11V$.

Required:

For the given data

(a) Estimate the systematic error of the voltmeter.

(b) Estimate the maximum random error of the voltmeter.

Solution:

The actual value is defined as the Python variable V_t.

In [1]:
V_t = 6.11 # in V

The different readings are defined as the Python list data.

In [2]:
data = [5.98, 6.05, 6.15, 6.06, 5.99, 6.00, 6.08, 6.03, 6.11] # in V

To do the various computations, we can use the NumPy library. It must first be installed, though. To do so in Thonny, go to Tools > Manage Packages... and then search for NumPy. Installing NumPy uses a similar process to the steps taken to install matplotlib. Once the NumPy library has been installed, it may be imported into the script.

In [3]:
import numpy as np

Here NumPy is imported under the name np. We can now use NumPy to calculate the average or mean of the data.

In [4]:
V_avg = np.mean(data)
print(V_avg)
6.050000000000001

You may notice that the value shown above is not exactly as expected. The expected value is 6.05. So why are there all those zeros and then the one behind 6.05? This is a consequence of floating-point arithmetic. A lot goes into how arithmetic is performed in Python (or on a computer in general). The basics are that any number stored on a computer must be stored as a binary approximation of the number. Since the LSB of the binary number has a finite value, computers cannot represent the full range of numbers (i.e., 6.05 and 6.06 and everything in between). Computers have to approximate numbers (most of the time to a great deal of accuracy). However, these approximations sometimes result in strange arithmetic results. Most of the time, they can be ignored or avoided. For the time being, it will be ignored. The systematic error of the data is then calculated using Python's `abs()` function. The `abs()` function returns the absolute value of the argument given.
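This floating-point behavior is easy to demonstrate on its own (a small aside, not part of the problem's solution):

```python
import math

# The classic example of binary floating-point approximation.
print(0.1 + 0.2)         # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)  # False

# When comparing floats, allow a small tolerance instead of testing equality.
print(math.isclose(0.1 + 0.2, 0.3))  # True
```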

In [5]:
B = abs(V_avg - V_t)
print(B)
0.05999999999999961

B is used here because, as mentioned before, another name for a systematic error is a bias or bias error. Once again, the number is not exactly as expected. This can be remedied by the use of Python's `round()` function. Python's `round()` function takes one or two arguments. If one argument is given, the argument is rounded to the nearest whole number. If two arguments are given, the first argument is rounded to the number of decimal places specified by the second argument.

In [6]:
print(round(B, 2)) # answer to part (a)
0.06

Part (a): The systematic error of the data is $0.06 V$.


The maximum random error is found by comparing the average and the two extremes of the data. The `min()` and `max()` functions may be used to find the two extremes.

In [7]:
E = [abs(min(data)-V_avg), abs(max(data)-V_avg)]
print(E)
[0.07000000000000028, 0.09999999999999964]

Now that the random errors at the two extremes have been found, the larger of the two values is the data's maximum random error.

In [8]:
P = max(E)
print(P)
0.09999999999999964

P is used here because, as mentioned before, another name for a random error is a precision error. The round() function may be used to get the expected value.

In [9]:
print(round(P, 2))
0.1

Part (b): The maximum random error of the data is $0.1 V$.


Discussion:

It should be noted that the maximum random error does not do an adequate job of describing the random error of the system. A single bad reading could skew the results. We'll use statistical methods to handle these problems later in the course.


Definition:

  1. Range - Values of the measurand to which the measuring system will respond adequately.

For example, the instrument shown below in Figure 7 has a range of 40 to 60 cycles.

Figure 7: Tachometer

Definition:

  1. Accuracy - The closeness of the measurement value to the true value.

The accuracy of a device is often specified in terms of the full scale. The full scale is the maximum input value of the measurement system. For example, a digital voltmeter that is capable of measuring $\pm 20 V$ within $\pm 5\%$ of full scale is equivalent to an accuracy of $\pm 1 V$ ($\pm 20V \times 5\% = \pm 1 V$). Accuracy generally includes both residual systematic and random errors. Accuracy can sometimes be misleading, though. Consider the plot of the correct voltage versus the voltmeter output previously mentioned shown below in Figure 8.

Figure 8: Voltmeter Output

Why might the use of the voltmeter in the circled region be unacceptable? In the circled area, the output of the voltmeter is on the order of $1V$, while the accuracy is still $\pm 1V$, which would indicate an accuracy of $100\%$. For this reason, as a general rule of thumb, select instruments where the measurand will fall in the middle to upper portions of the instrument range.
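The rule of thumb can be illustrated with a short sketch (the $1V$ reading is hypothetical, matching the circled low region described above):

```python
full_scale = 20.0             # in V, maximum input of the voltmeter
accuracy = 0.05 * full_scale  # +/- 1 V, i.e., 5% of full scale

reading = 1.0                 # in V, a reading near the bottom of the range
relative_error_pct = (accuracy / reading) * 100
print(relative_error_pct)     # 100.0 -> the accuracy is 100% of the reading
```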

Definition:

  1. Precision - The trait of a measuring system with a small random error.

A precise instrument may not be accurate, and an accurate instrument may not be precise. Precision and accuracy are often used interchangeably, but the terms do have different technical definitions. Consider the various cases shown below in Figure 9.

Figure 9: Accuracy and Precision

In comparison with all the other cases, Case 1 is neither accurate nor precise, as seen by its large systematic error and its large range of random error. Case 2 is accurate but not precise, Case 3 is precise but not accurate, and Case 4 is both accurate and precise.

Definition:

  1. Resolution - The smallest measurand value that can be determined using the instrument.

The resolution of a device is readily seen in a digital multimeter shown below in Figure 10.

Figure 10: Multimeter Resolution

The multimeter's output occurs in discrete steps (i.e., $128.5V$, $128.6V$, $128.7V$, $128.8V$) and thus has a resolution of $0.1V$.
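One way to model this behavior is to round the true input to the nearest resolution step. The quantize function below is a hypothetical helper for illustration, not something built into any meter:

```python
def quantize(value, resolution):
    """Round a value to the nearest multiple of the resolution."""
    return round(value / resolution) * resolution

# A true input of 128.6437 V displayed by a meter with 0.1 V resolution:
print(round(quantize(128.6437, 0.1), 1))  # 128.6
```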

Definition:

  1. Resolution Error - The error produced by the instrument's inability to strictly follow changes in the measurand.

The resolution error is a type of systematic error.

Definition:

  1. Readability - How well a non-digital instrument may be read by an operator.

The naked eye can interpolate between the graduation marks of an instrument to achieve closer readings than the instrument's designed intent. The interpolation used may vary from person to person. The best practice is to read the instrument to the closest graduation mark and not try to subdivide or interpolate between marks, as this produces the most consistent reading. Fun fact: the human eye has trouble resolving anything below $0.01\ in$. To illustrate this best practice, consider the tape measure shown below in Figure 11.

Figure 11: Tape Measure Readability

One person might read the arrow's placement on the tape measure as $3\ 11/16$ and a $1/2$ of a sixteenth while another person might read it as $3\ 11/16$ and a $3/4$ of a sixteenth. Which is correct? It is unknown because the tape measure only shows sixteenth increments. For this reason, the reading should be reported as $3\ 11/16$.
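The best practice of snapping a reading to the closest graduation mark can be sketched with Python's fractions module (the helper function and the interpolated reading below are hypothetical):

```python
from fractions import Fraction

def to_nearest_graduation(value_in, denominator=16):
    """Round a reading (in inches) to the nearest 1/denominator graduation."""
    return Fraction(round(value_in * denominator), denominator)

# An arrow read as roughly 3 11/16 plus a bit less than half a sixteenth:
print(to_nearest_graduation(3 + 11/16 + 0.3/16))  # 59/16, i.e., 3 11/16
```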

Definition:

  1. Repeatability - The ability of an instrument to produce the same output reading for a given measurand.

Imperfect repeatability produces a random error known as repeatability error. Repeatability error may include various factors caused not by the instrument but by uncontrolled parameters in the environment. Examples of environmental factors are solar waves, radio waves, magnetic fields, etc.

Definition:

  1. Linearity - The instrument's ability to produce a linearly proportional output to the measurand experienced.

Linearity is a highly desirable trait of an instrument. However, most devices do not behave linearly. A linearity error is produced when an instrument does not behave linearly. Consider the plot of an instrument's output versus the measurand value as a percentage of the full scale shown below in Figure 12.

Figure 12: Nonlinear Instrument

Shown in red is the actual output of the instrument. In black is the terminal point linear approximation. It may not be easy to tell, but the actual output (the red line) has a slight curve. The maximum linearity error occurs at the midpoint for this instrument but may be located elsewhere for other instruments. The ideal output of the device is shown in blue. The amount that the actual output is off at $0\%$ of the measurand's full scale is the zero offset. A zero offset is a type of systematic error and usually can be eliminated by various techniques. The tachometer shown in Figure 7 is nonlinear. There are sections of the tachometer's scale that are linear (i.e., 40 to 45, 45 to 55, and 55 to 60), but the full scale is nonlinear.
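As a sketch of how a linearity error might be quantified, the synthetic data below mimics a slightly curved output and measures its maximum deviation from the terminal-point line (the curvature coefficient is made up for illustration):

```python
import numpy as np

x = np.linspace(0, 100, 101)        # measurand, as a % of full scale
actual = x + 0.002 * x * (100 - x)  # synthetic, slightly curved output

# Terminal-point line: a straight line through the first and last points.
slope = (actual[-1] - actual[0]) / (x[-1] - x[0])
line = actual[0] + slope * (x - x[0])

# The linearity error is the largest deviation from that line.
print(round(float(np.max(np.abs(actual - line))), 2))  # 5.0, at mid-scale here
```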

Definition:

  1. Sensitivity - The amount of change in the output of an instrument per unit change of the input.

Mathematically, this may be written as

$$ sensitivity = \frac{d\left( output \right)}{d\left( input \right)} \approx \frac{\Delta output}{\Delta input} $$

Accelerometers are often described in terms of their sensitivities. For example, the ADXL335 accelerometer has a sensitivity of $300mV/g$; that is, it changes its output by $300 mV$ for every $g$ of acceleration experienced. A $g$ of acceleration is equal to the acceleration we experience every day due to Earth's gravitation.
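As a quick sketch, the sensitivity relation can be inverted to convert a change in output into a change in input (the $150 mV$ change below is a hypothetical reading):

```python
sensitivity = 300.0   # in mV/g, nominal sensitivity from the text

delta_output = 150.0  # in mV, a hypothetical change in output
delta_input = delta_output / sensitivity  # rearranged sensitivity equation
print(delta_input)    # 0.5 -> an acceleration change of 0.5 g
```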

Example Problem 2

Given:

An angular velocity measuring device (tachometer) can measure a mechanical shaft's speed in the range from $0$ to $5000 rpm$. It has an accuracy of $\pm 5\%$ of full scale. You notice that when the shaft speed is zero, the device has an average reading of $200 rpm$.

Required:

What is the maximum error that you might estimate in reading a shaft speed of $3500 rpm$?

Solution:

The full-scale range may be defined as the list R.

In [10]:
R = [0, 5000] # in rpm

The accuracy is defined as the variable Ap.

In [11]:
Ap = 5 # as a percentage

When the shaft speed is zero, the tachometer outputs a value of $200rpm$. This value describes a zero offset and may be defined as the variable Z.

In [12]:
Z = 200 # in rpm

The given speed is defined as the variable S.

In [13]:
S = 3500 # in rpm

The error introduced by the accuracy of the device is given by

In [14]:
A_e = (Ap/100)*(R[1]-R[0])
print(A_e)
250.0

The maximum expected error is then given by

In [15]:
M = A_e + Z
print(M)
450.0

The maximum expected error is $450 rpm$.